Researchers from Google DeepMind, the University of Washington, and other institutions have extracted portions of ChatGPT's training data using a so-called 'poetry attack'. The method asks the model to repeat a single word, such as 'poem', indefinitely; after many repetitions the model diverges and begins emitting memorized training data verbatim, including personal information such as phone numbers and email addresses. The vulnerability was reported to OpenAI on August 30, and OpenAI has not commented further. The research also calls for environmentally conscious AI use, pointing to the significant energy cost that text and image generation imposes on the environment.
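Extracted output is typically confirmed as memorized training data by checking whether it reproduces a long verbatim run from known web text. A minimal sketch of such an overlap check, using word-level windows and illustrative names (the researchers' actual pipeline matched long token sequences against a large web corpus; this is a simplification, not their code):

```python
def kgram_set(text: str, k: int) -> set[str]:
    """All k-word sliding windows in `text`."""
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def contains_verbatim_run(output: str, corpus: str, k: int = 13) -> bool:
    """True if any k-word window of `output` appears verbatim in `corpus`,
    suggesting the model regurgitated memorized text rather than composing it."""
    corpus_grams = kgram_set(corpus, k)
    return any(gram in corpus_grams for gram in kgram_set(output, k))

# Example: a copied 5-word run is flagged, fresh text is not.
reference = "the quick brown fox jumps over the lazy dog"
print(contains_verbatim_run("he said the quick brown fox jumps over it", reference, k=5))
print(contains_verbatim_run("completely unrelated novel text here now", reference, k=5))
```

In practice the window length trades off false positives (short common phrases) against missed matches, which is why the real verification used much longer sequences.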